
    Transform Ranking: a New Method of Fitness Scaling in Genetic Algorithms

    The first systematic evaluation of the effects of six existing forms of fitness scaling in genetic algorithms is presented alongside a new method called transform ranking. Each method has been applied to stochastic universal sampling (SUS) over a fixed number of generations. The test functions chosen were the two-dimensional Schwefel and Griewank functions. The quality of the solution was improved by applying sigma scaling, linear rank scaling, nonlinear rank scaling, probabilistic nonlinear rank scaling, and transform ranking. However, this benefit always came at a computational cost. Generic linear scaling and Boltzmann scaling were each of benefit in one fitness landscape but not the other. The new fitness scaling function, transform ranking, progresses from linear to nonlinear rank scaling during the evolution process according to a transform schedule. This new form of fitness scaling was found to be one of the two methods offering the greatest improvements in the quality of search. It provided the best improvement in the quality of search for the Griewank function, and was second only to probabilistic nonlinear rank scaling for the Schwefel function. Tournament selection, by comparison, was always the computationally cheapest option but did not necessarily find the best solutions.
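    The abstract does not give the paper's exact transform schedule, so the following is only a minimal sketch of how a transform-ranking-style scaler feeding SUS could look. The linear blend schedule, the selection pressure s = 1.7, and the geometric base 0.95 are illustrative assumptions, and all function names are mine:

```python
import numpy as np

def linear_rank(ranks, n, s=1.7):
    # Baker's linear ranking: worst gets 2 - s, best gets s (1 < s <= 2).
    return (2 - s) + 2 * (s - 1) * ranks / (n - 1)

def nonlinear_rank(ranks, n, base=0.95):
    # Geometric (nonlinear) ranking: selection pressure concentrates on the best.
    return base ** (n - 1 - ranks)

def transform_rank(raw_fitness, generation, max_gen):
    # Transform-ranking sketch: blend from linear to nonlinear rank scaling
    # as evolution proceeds, per an (assumed linear) transform schedule.
    n = len(raw_fitness)
    ranks = np.argsort(np.argsort(raw_fitness))   # 0 = worst, n - 1 = best
    t = generation / max_gen                      # 0 at start, 1 at the end
    scaled = (1 - t) * linear_rank(ranks, n) + t * nonlinear_rank(ranks, n)
    return scaled / scaled.sum()                  # selection probabilities

def sus(probs, n_select, rng):
    # Stochastic universal sampling: n equally spaced pointers on one spin.
    cum = np.cumsum(probs)
    pointers = rng.uniform(0, 1 / n_select) + np.arange(n_select) / n_select
    return np.minimum(np.searchsorted(cum, pointers), len(probs) - 1)
```

A caller would recompute `transform_rank` each generation and pass the result to `sus` to pick the mating pool.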

    Kaluza-Klein 5D Ideas Made Fully Geometric

    After the 1916 success of general relativity, which explained gravity by adding time as a fourth dimension, physicists have been trying to explain other physical fields by adding extra dimensions. In 1921, Kaluza and Klein showed that under certain conditions, such as cylindricity ($\partial g_{ij}/\partial x^5 = 0$), the addition of a 5th dimension can explain the electromagnetic field. The problem with this approach is that while the model itself is geometric, conditions like cylindricity are not. This problem was partly solved by Einstein and Bergmann, who proposed, in their 1938 paper, that the 5th dimension is compactified into a small circle $S^1$, so that in the resulting cylindric 5D space-time $R^4 \times S^1$ the dependence on $x^5$ is not macroscopically noticeable. We show that if, in all definitions of vectors, tensors, etc., we replace $R^4$ with $R^4 \times S^1$, then conditions like cylindricity follow automatically, i.e., these conditions become fully geometric.
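    For context (this is the textbook Kaluza-Klein setup, not spelled out in the abstract): the 5D metric packages the 4D metric, the electromagnetic potential, and a scalar field, and cylindricity is the statement that none of them depends on $x^5$:

```latex
% Standard Kaluza--Klein ansatz: 4D metric g, potential A, scalar phi.
\[
  \hat g_{AB} =
  \begin{pmatrix}
    g_{\mu\nu} + \phi^2 A_\mu A_\nu & \phi^2 A_\mu \\
    \phi^2 A_\nu                    & \phi^2
  \end{pmatrix},
  \qquad
  \frac{\partial \hat g_{AB}}{\partial x^5} = 0 .
\]
```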

    Interval rational = algebraic


    Subsquares Approach - Simple Scheme for Solving Overdetermined Interval Linear Systems

    In this work we present a new, simple, but efficient scheme, the subsquares approach, for developing algorithms that enclose the solution set of overdetermined interval linear systems. We show two algorithms based on this scheme and discuss their features. We start with a simple algorithm as motivation and then continue with a sequential algorithm. Both algorithms can be easily parallelized. The features of both algorithms are discussed and numerically tested.
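    The abstract leaves the scheme's internals out, but the motivating observation is that the solution set of any square subsystem (a "subsquare") contains the solution set of the whole overdetermined system, so intersecting enclosures of several subsquares yields an enclosure of the latter. A minimal sketch under that reading; the residual-iteration enclosure for each square system and all helper names are my choices, not necessarily the paper's:

```python
import numpy as np

def imul(A_lo, A_hi, x_lo, x_hi):
    # Interval matrix times interval vector: entrywise min/max over the
    # four endpoint products, then sum the per-column intervals.
    p = np.stack([A_lo * x_lo, A_lo * x_hi, A_hi * x_lo, A_hi * x_hi])
    return p.min(axis=0).sum(axis=1), p.max(axis=0).sum(axis=1)

def pmul(Y, M_lo, M_hi):
    # Point matrix times interval matrix.
    t1 = Y[:, :, None] * M_lo[None, :, :]
    t2 = Y[:, :, None] * M_hi[None, :, :]
    return np.minimum(t1, t2).sum(axis=1), np.maximum(t1, t2).sum(axis=1)

def pmulv(Y, b_lo, b_hi):
    # Point matrix times interval vector.
    t1, t2 = Y * b_lo, Y * b_hi
    return np.minimum(t1, t2).sum(axis=1), np.maximum(t1, t2).sum(axis=1)

def square_enclosure(A_lo, A_hi, b_lo, b_hi, iters=50):
    # Enclose the solution set of a *square* interval system via the
    # residual form x = Y[b] + (I - Y[A]) x, with Y the midpoint inverse.
    n = len(b_lo)
    Y = np.linalg.inv((A_lo + A_hi) / 2)
    z_lo, z_hi = pmulv(Y, b_lo, b_hi)
    YA_lo, YA_hi = pmul(Y, A_lo, A_hi)
    C_lo, C_hi = np.eye(n) - YA_hi, np.eye(n) - YA_lo
    magC = np.maximum(np.abs(C_lo), np.abs(C_hi))
    if np.linalg.norm(magC, np.inf) >= 1:
        return None  # midpoint preconditioning did not contract; give up
    # |x| <= mag(z) + magC |x|  implies  |x| <= (I - magC)^{-1} mag(z).
    w = np.linalg.solve(np.eye(n) - magC,
                        np.maximum(np.abs(z_lo), np.abs(z_hi)))
    x_lo, x_hi = -w, w
    for _ in range(iters):  # refine by intersecting with z + C x
        cx_lo, cx_hi = imul(C_lo, C_hi, x_lo, x_hi)
        x_lo = np.maximum(x_lo, z_lo + cx_lo)
        x_hi = np.minimum(x_hi, z_hi + cx_hi)
    return x_lo, x_hi

def subsquares_enclosure(A_lo, A_hi, b_lo, b_hi, n_squares=20, seed=0):
    # Each square subsystem's solution set contains the full system's,
    # so intersecting subsquare enclosures encloses the full solution set.
    m, n = A_lo.shape
    rng = np.random.default_rng(seed)
    x_lo, x_hi = np.full(n, -np.inf), np.full(n, np.inf)
    for _ in range(n_squares):
        r = rng.choice(m, size=n, replace=False)
        enc = square_enclosure(A_lo[r], A_hi[r], b_lo[r], b_hi[r])
        if enc is not None:
            x_lo = np.maximum(x_lo, enc[0])
            x_hi = np.minimum(x_hi, enc[1])
    return x_lo, x_hi
```

Sampling random row subsets keeps the sketch cheap; the paper's sequential algorithm presumably chooses subsquares more deliberately.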

    Estimating Probability of Failure of a Complex System Based on Inexact Information about Subsystems and Components, with Potential Applications to Aircraft Maintenance

    In many real-life applications (e.g., in aircraft maintenance), we need to estimate the probability of failure of a complex system (such as an aircraft as a whole or one of its subsystems). Complex systems are usually built with redundancy, allowing them to withstand the failure of a small number of components. In this paper, we assume that we know the structure of the system, and, as a result, for each possible set of failed components, we can tell whether this set will lead to a system failure. For each component A, we know the probability P(A) of its failure with some uncertainty: e.g., we know lower and upper bounds $\underline{P}(A)$ and $\overline{P}(A)$ for this probability. Usually, it is assumed that failures of different components are independent events. Our objective is to use all this information to estimate the probability of failure of the entire complex system. In this paper, we describe several methods for solving this problem, including a new efficient method for such estimation based on Cauchy deviates.
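    The abstract names the Cauchy-deviates technique without detail. The generic version of that trick, known from the interval-computations literature, estimates the linearized range of a quantity under interval uncertainty by sampling Cauchy-distributed deviates and recovering the resulting Cauchy scale by maximum likelihood. A sketch under that reading, with a hypothetical 2-out-of-3 system as the structure function (not the paper's example):

```python
import numpy as np

def cauchy_deviates_range(f, p_mid, p_rad, n_sim=200, seed=0):
    # Estimate the half-width D of the range of f over the box
    # [p_mid - p_rad, p_mid + p_rad], assuming f is roughly linear there.
    rng = np.random.default_rng(seed)
    f0 = f(p_mid)
    d = np.empty(n_sim)
    for k in range(n_sim):
        # Cauchy deviates with per-component scale p_rad:
        delta = p_rad * np.tan(np.pi * (rng.random(len(p_mid)) - 0.5))
        # Rescale into the box; homogeneity of the linearization lets us
        # multiply the observed difference back up by the same factor c.
        c = max(1.0, np.max(np.abs(delta) / np.where(p_rad > 0, p_rad, 1.0)))
        d[k] = c * (f(p_mid + delta / c) - f0)
    # d is (approximately) Cauchy with scale D = sum_i |df/dp_i| * rad_i;
    # maximum-likelihood: solve sum 1/(1 + (d_k/D)^2) = n/2 by bisection.
    lo, hi = 1e-12, np.max(np.abs(d)) + 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.sum(1.0 / (1.0 + (d / mid) ** 2)) < n_sim / 2:
            lo = mid  # scale too small
        else:
            hi = mid
    D = 0.5 * (lo + hi)
    return f0 - D, f0 + D

# Hypothetical 2-out-of-3 system: fails iff at least 2 components fail.
def system_failure_prob(p):
    p1, p2, p3 = p
    return p1*p2*(1-p3) + p1*(1-p2)*p3 + (1-p1)*p2*p3 + p1*p2*p3

lo, hi = cauchy_deviates_range(system_failure_prob,
                               p_mid=np.array([0.02, 0.03, 0.05]),
                               p_rad=np.array([0.005, 0.005, 0.01]))
```

The appeal of the trick is that the cost grows with the number of simulations, not with the number of components.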

    Which Distributions (or Families of Distributions) Best Represent Interval Uncertainty: Case of Permutation-Invariant Criteria

    In many practical situations, we only know the interval containing the quantity of interest; we have no information about the probability of different values within this interval. In contrast to the cases when we know the distributions and can thus use Monte-Carlo simulations, processing such interval uncertainty is difficult: crudely speaking, because we need to try all possible distributions on this interval. Sometimes, the problem can be simplified: namely, it is possible to select a single distribution (or a small family of distributions) whose analysis provides a good understanding of the situation. The best-known case is when we use the Maximum Entropy approach and get the uniform distribution on the interval. Interestingly, sensitivity analysis, which has completely different objectives, leads to the selection of the same uniform distribution. In this paper, we provide a general explanation of why the uniform distribution appears in different situations: namely, it appears every time we have a permutation-invariant objective function with a unique optimum. We also discuss what happens if there are several optima.
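    For the reader's convenience, here is the one-step Maximum Entropy computation the abstract alludes to, showing why the uniform distribution wins on an interval $[a,b]$:

```latex
% Maximize the entropy S[p] subject to normalization; a Lagrange-multiplier
% step forces the density to be constant, i.e., uniform on [a,b].
\[
  S[p] = -\int_a^b p(x)\,\ln p(x)\,dx, \qquad \int_a^b p(x)\,dx = 1,
\]
\[
  \frac{\delta}{\delta p}\Bigl( S[p] - \lambda \int_a^b p(x)\,dx \Bigr) = 0
  \;\Longrightarrow\; -\ln p(x) - 1 - \lambda = 0
  \;\Longrightarrow\; p(x) \equiv \frac{1}{b-a}.
\]
```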

    Gaussian and Cauchy Functions in the Filled Function Method – Why and What Next: On the Example of Optimizing Road Tolls

    In many practical problems, we need to find the values of the parameters that optimize the desired objective function. For example, for toll roads, it is important to set the toll values that lead to the fastest return on investment. There exist many optimization algorithms; the problem is that these algorithms often end up in a local optimum. One of the promising methods to avoid local optima is the filled function method, in which we, in effect, first optimize a smoothed version of the objective function, and then use the resulting optimum to look for the optimum of the original function. It turns out that, empirically, the best smoothing functions to use in this method are the Gaussian and Cauchy functions. In this paper, we show that from the viewpoint of computational complexity, these two smoothing functions are indeed the simplest. The Gaussian and Cauchy functions are not a panacea: in some cases, they still leave us with a local optimum. In this paper, we use computational complexity analysis to describe the next-simplest smoothing functions, which are worth trying in such situations.
    Keywords: optimization; toll roads; filled function method; Gaussian and Cauchy smoothing
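    To make the two-stage idea concrete, here is a sketch of smoothing-based global search in the spirit the abstract describes: optimize a Gaussian-smoothed surrogate first, then polish on the original objective from the surrogate's optimum. The Monte-Carlo smoothing, the Nelder-Mead choice, and the test function are illustrative assumptions, not the paper's filled function:

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_smoothed(f, sigma, n_samples=64, seed=0):
    # Monte-Carlo approximation of F(x) = E[f(x + sigma * z)], z ~ N(0, I).
    # Re-seeding per call (common random numbers) keeps F deterministic.
    def F(x):
        rng = np.random.default_rng(seed)
        z = rng.standard_normal((n_samples, len(x)))
        return np.mean([f(np.asarray(x) + sigma * zi) for zi in z])
    return F

def smoothed_then_polish(f, x0, sigma=1.0):
    # Stage 1: optimize the smoothed surrogate to escape shallow basins.
    stage1 = minimize(gaussian_smoothed(f, sigma), x0, method="Nelder-Mead")
    # Stage 2: polish on the original objective from the smoothed optimum.
    stage2 = minimize(f, stage1.x, method="Nelder-Mead")
    return stage2.x, stage2.fun

# Example: a 1-D multimodal objective (many local minima, global near 0).
f = lambda x: float(x[0]**2 / 20.0 + np.sin(3 * x[0])**2)
x_best, f_best = smoothed_then_polish(f, x0=np.array([4.0]), sigma=2.0)
```

A Cauchy-smoothed variant would only change the sampling line (heavier tails explore farther at the same scale).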

    In system identification, interval (and fuzzy) estimates can lead to much better accuracy than the traditional statistical ones: General algorithm and case study

    In many real-life situations, we know the upper bound of the measurement errors, and we also know that the measurement error is the joint result of several independent small effects. In such cases, due to the Central Limit Theorem, the corresponding probability distribution is close to Gaussian, so it seems reasonable to apply standard Gaussian-based statistical techniques to process this data, in particular, when we need to identify a system. Yes, in doing this, we ignore the information about the bounds, but since the probability of exceeding them is small, we do not expect this to make a big difference in the result. Surprisingly, it turns out that in some practical situations, we get much more accurate estimates if we, vice versa, take into account the bounds and ignore all the information about the probabilities. In this paper, we explain the corresponding algorithms, and we show, on a practical example, that using them can indeed lead to a drastic improvement in estimation accuracy. © 2017 IEEE
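    For contrast with least squares, here is a minimal sketch of the bounded-error (set-membership) style of estimation the abstract advocates, on a hypothetical linear-in-parameters model y ≈ a·u + b with |error| ≤ delta; the exact parameter intervals come from linear programs over the feasible set. This is the general flavor of interval identification, not the paper's specific algorithm or case study:

```python
import numpy as np
from scipy.optimize import linprog

def interval_identify(u, y, delta):
    # Feasible set: all (a, b) with |y_k - (a*u_k + b)| <= delta for every k.
    # Its projections onto each parameter axis are found by two LPs apiece.
    m = len(u)
    G = np.column_stack([u, np.ones(m)])          # rows: [u_k, 1]
    A_ub = np.vstack([G, -G])                     # G theta <= y + delta
    b_ub = np.concatenate([y + delta, -(y - delta)])  # -G theta <= -(y - delta)
    bounds = [(None, None), (None, None)]         # parameters unconstrained
    out = {}
    for j, name in enumerate(["a", "b"]):
        c = np.zeros(2); c[j] = 1.0
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)   # min theta_j
        hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)  # max theta_j
        out[name] = (lo.fun, -hi.fun)
    return out

# Compare with the standard statistical estimate:
# theta_ls = np.linalg.lstsq(G, y, rcond=None)[0]
```

When the error bound delta is tight, the feasible set, and hence each parameter interval, can be far narrower than the least-squares confidence interval, which is the effect the abstract reports.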